
    Reformulation in planning

    Reformulation of a problem is intended to make the problem more amenable to efficient solution. This is equally true in the special case of reformulating a planning problem. This paper considers various ways in which reformulation can be exploited in planning.

    Reading handwritten digits: a ZIP code recognition system

    A neural network algorithm-based system that reads handwritten ZIP codes appearing on real US mail is described. The system uses a recognition-based segmenter that is a hybrid of connected-components analysis (CCA), vertical cuts, and a neural network recognizer. Connected components that are single digits are handled by CCA; connected components that are combined or dissected digits are handled by the vertical-cut segmenter. The four main stages of processing are preprocessing (in which noise is removed and the digits are deslanted), CCA segmentation and recognition, vertical-cut-point estimation and segmentation, and directory lookup. The system was trained and tested on approximately 10,000 images of five- and nine-digit ZIP code fields taken from real mail.
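
    As a rough illustration of the segmentation stages described above, the sketch below runs connected-components analysis on a binarized digit field and falls back to a crude vertical cut for blobs that look like touching digits. The width heuristic and the minimum-ink cut rule are invented stand-ins for illustration, not the paper's trained estimators or recognizer.

```python
# Minimal sketch of CCA segmentation with a vertical-cut fallback; the
# thresholds and the cut-point rule are hypothetical, not the paper's method.
import numpy as np
from scipy import ndimage

def segment_digits(binary_image: np.ndarray) -> list:
    """Split a binarized ZIP-code field into candidate digit images."""
    # Connected-components analysis (CCA): each isolated blob is a candidate digit.
    labels, n = ndimage.label(binary_image)
    candidates = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        blob = binary_image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        # Hypothetical width heuristic: a blob much wider than tall is treated as
        # touching digits and split at the interior column with the least ink --
        # a crude stand-in for the paper's vertical-cut-point estimator.
        if blob.shape[1] > 1.2 * blob.shape[0] and blob.shape[1] > 2:
            cut = int(np.argmin(blob[:, 1:-1].sum(axis=0))) + 1
            candidates.extend([blob[:, :cut], blob[:, cut:]])
        else:
            candidates.append(blob)
    return candidates

# Toy field: two well-separated digit-like blobs plus one wide blob of touching strokes.
field = np.zeros((10, 30), dtype=np.uint8)
field[2:8, 2:5] = 1
field[2:8, 8:11] = 1
field[2:8, 15:27] = 1          # wide blob -> gets a vertical cut
print([c.shape for c in segment_digits(field)])
```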

    Methods for classically simulating noisy networked quantum architectures

    As research on building scalable quantum computers advances, it is important to be able to certify their correctness. Because classically simulating quantum computation is exponentially hard, straightforward verification through classical simulation fails. However, we can classically simulate small-scale quantum computations, and hence we are able to test that devices behave as expected in this domain. This constitutes the first step towards obtaining confidence in the anticipated quantum advantage when we extend to scales which can no longer be simulated. Realistic devices have restrictions due to their architecture and limitations due to physical imperfections and noise. Here we extend the usual ideal simulations by considering those effects. We provide a general methodology for constructing realistic simulations emulating the physical system, which will both provide a benchmark for realistic devices and guide experimental research in the quest for quantum advantage. We exemplify our methodology by simulating a networked architecture and corresponding noise model; in particular, that of the device developed in the Networked Quantum Information Technologies Hub (NQIT). For our simulations we use, with suitable modification, the classical simulator of Bravyi and Gosset. The specific problems considered belong to the class of Instantaneous Quantum Polynomial-time (IQP) problems, a class believed to be hard for classical computing devices and to be a promising candidate for the first demonstration of quantum advantage. We first consider a subclass of IQP, defined by Bermejo-Vega et al., involving two-dimensional dynamical quantum simulators, before moving to more general instances of IQP which are still restricted to the architecture of NQIT.
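
    For small qubit counts, the ideal (noise-free) IQP sampling task the abstract refers to can be brute-forced with a dense statevector, which is useful for intuition even though it scales exponentially and is not the Bravyi-Gosset stabilizer-rank approach used in the paper. The circuit below, with randomly chosen Z and ZZ phases, is a generic small IQP instance assumed for illustration.

```python
# Brute-force sampling of a small IQP circuit: H^n . diagonal-phase . H^n on |0...0>.
# Exponential-cost sketch for intuition only; not the simulator used in the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # keep n small: the statevector has 2**n entries

# Random IQP phase polynomial: single-qubit Z phases and pairwise ZZ phases.
theta = rng.uniform(0, 2 * np.pi, size=n)
phi = rng.uniform(0, 2 * np.pi, size=(n, n))

# Enumerate the computational basis as bit arrays of shape (2**n, n).
bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1

# The first Hadamard layer gives the uniform superposition; the diagonal layer
# multiplies basis state x by exp(i * f(x)), with f quadratic in the bits of x.
f = bits @ theta + np.einsum('bi,bj,ij->b', bits, bits, np.triu(phi, 1))
state = np.exp(1j * f) / np.sqrt(2 ** n)

# Final Hadamard layer: amplitude of outcome y is (1/sqrt(2**n)) * sum_x (-1)**(x.y) state[x].
signs = (-1.0) ** (bits @ bits.T)
probs = np.abs(signs @ state / np.sqrt(2 ** n)) ** 2

sample = rng.choice(2 ** n, p=probs / probs.sum())
print(f"sampled bit string: {int(sample):0{n}b}")
```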

    Portrait de l'utilisation des pesticides en milieu urbain

    This study paints a portrait of pesticide use in urban settings and is divided into three parts. The first part describes the habits related to pesticide use among citizens of the municipality of Fleurimont, in Estrie. To get a general sense of this use on a larger scale, the Fleurimont study was compared with two other similar major studies in Québec. Among the results observed, citizens are very concerned with the aesthetic appearance of their lots, the use of chemical pesticides is high, and citizens know little about the products they use. The second part profiles the retailers in Sherbrooke and its suburbs who sell such products. It shows that sales of pesticides (chemical and biological) have increased, that little objective documentation is available on the products sold, and that consumers do not hesitate to ask for information. Finally, the third part analyzes the behaviour of the municipalities of the MRC de Sherbrooke. It finds that two municipalities in the MRC use pesticides, that the municipalities do little awareness-raising to inform citizens about pesticide issues, and that none of them regulates the use of these products.

    General Terms

    A methodology for embedding predictive modeling algorithms in a commercial parallel database is described, specifically in the parallel editions of IBM DB2 Universal Database, although many aspects of the overall approach can be used with other commercial parallel databases. This parallelization approach was implemented in the Version 8.2 release of DB2 Intelligent Miner Modeling to support a new predictive modeling algorithm called Transform Regression. This database-embedded mining algorithm provides all the usual benefits, including easier integration into large enterprise applications, the ability to perform entire data mining workflows directly from an SQL-based programming interface, reduced data transfer costs between the database and the data mining application, and faster, parallel data access during query processing. In addition to these benefits, a significant part of the data mining computation is also parallelized without the use of any sophisticated parallel programming constructs or any specialized message-passing and parallel-synchronization libraries.
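
    The pattern the abstract describes, keeping the data inside the database and driving model building and scoring through SQL, can be sketched as below. The connection string, the MINING.BUILD_REGRESSION_MODEL procedure, and the MINING.PREDICT scoring function are hypothetical placeholders, not the actual DB2 Intelligent Miner Modeling interface.

```python
# Hedged sketch of the database-embedded pattern: training data never leaves the
# database; a model-build routine is invoked through SQL. The procedure and
# function names below are hypothetical, not IBM's API.
import ibm_db_dbi  # IBM's DB-API driver for DB2; any DB-API module fits the pattern

conn = ibm_db_dbi.connect(
    "DATABASE=sales;HOSTNAME=dbhost;PORT=50000;UID=miner;PWD=secret", "", "")
cur = conn.cursor()

# Build the model inside the database engine; only the model name and SQL text
# cross the wire, so parallel data access happens on the server side.
cur.execute(
    "CALL MINING.BUILD_REGRESSION_MODEL(?, ?, ?)",            # hypothetical procedure
    ("churn_model", "SELECT * FROM CUSTOMER_HISTORY", "TARGET_COLUMN=CHURN"),
)

# Scoring is likewise expressed as a query, keeping the whole workflow in SQL.
cur.execute(
    "SELECT CUST_ID, MINING.PREDICT('churn_model', c.*) AS SCORE "  # hypothetical function
    "FROM CUSTOMER c")
for cust_id, score in cur.fetchmany(5):
    print(cust_id, score)
conn.close()
```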

    Data Mining with Extended Symbolic Models

    Symbolic modeling of data with decision trees or decision rules has a certain appeal to data mining application developers. The computational efficiency of the modeling methodology and the inherently explanatory nature of the models that are generated are two often-cited reasons for the preferred use of these methods. Traditionally, the applications of these methods had been restricted to classification modeling. Recent extensions to these methods, employing ideas from statistics and machine learning, have resulted in more general frameworks that retain the underlying characteristics but apply to a much wider class of applications. These extended symbolic modeling methodologies permit exciting new application avenues, including probabilistic modeling, text mining, and integrating data mining into knowledge-based frameworks. Highlights of work in this area by the data abstraction research group at IBM's T.J. Watson Research Center will be presented.
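
    A minimal sketch of the appeal described above, using scikit-learn's decision tree purely for illustration (it is not the toolkit behind this work): the fitted model is a readable set of rules, and the per-leaf class distributions give the kind of probabilistic output that the extended frameworks generalize.

```python
# Illustrative only: a cheap-to-fit symbolic model whose predictions can be
# explained by the path of tests that produced them.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The fitted model is itself a readable rule set ...
print(export_text(tree, feature_names=["sep_len", "sep_wid", "pet_len", "pet_wid"]))

# ... and the per-leaf class distributions provide a probabilistic view rather
# than a single hard classification.
print(tree.predict_proba(X[:3]))
```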

    Probabilistic estimation based data mining for discovering insurance risks

    The UPA (Underwriting Profitability Analysis) application embodies a new approach to mining Property & Casualty (P&C) insurance policy and claims data for the purpose of constructing predictive models for insurance risks. UPA utilizes the ProbE (Probabilistic Estimation) predictive modeling data mining kernel to discover risk characterization rules by analyzing large and noisy insurance data sets. Each rule defines a distinct risk group and its level of risk. To satisfy regulatory constraints, the risk groups are mutually exclusive and exhaustive. The rules generated by ProbE are statistically rigorous, interpretable, and credible from an actuarial standpoint. Our approach to modeling insurance risks and the implementation of that approach have been validated in an actual engagement with a P&C firm. The benefit assessment of the results suggests that this methodology provides significant value to the P&C insurance risk management process.
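
    The form of the output, a mutually exclusive and exhaustive set of rules, each defining a risk group with an estimated risk level, can be sketched as below. The rules, column names, and figures are invented for illustration and are not taken from ProbE or the UPA engagement.

```python
# Hypothetical example of mutually exclusive, exhaustive risk-group rules and
# a per-group risk estimate; not ProbE output or real insurance data.
import pandas as pd

policies = pd.DataFrame({
    "driver_age":  [22, 45, 67, 31, 19, 54],
    "vehicle_age": [1, 8, 3, 12, 2, 5],
    "claim_cost":  [3200.0, 0.0, 450.0, 900.0, 5100.0, 0.0],
})

def risk_group(row) -> str:
    # The three branches cover every policy exactly once, so the rule set is
    # mutually exclusive and exhaustive, as regulators require.
    if row.driver_age < 25:
        return "young_driver"
    if row.vehicle_age >= 10:
        return "old_vehicle"
    return "baseline"

policies["group"] = policies.apply(risk_group, axis=1)

# A simple risk level per group: the expected claim cost (pure premium).
print(policies.groupby("group")["claim_cost"].mean())
```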

    Decomposition of heterogeneous classification problems
